Results 1 - 20 of 116

1.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20243873

ABSTRACT

As intelligent driving vehicles move from concept into everyday life, the combination of safe driving and artificial intelligence has become the new direction of future transportation development. Autonomous driving technology is developing on the basis of control algorithms and model recognition. In this paper, a cloud-based interconnected multi-sensor fusion autonomous vehicle system is proposed that uses deep learning (YOLOv4) and improved ORB algorithms to identify pedestrians, vehicles, and various traffic signs. A cloud-based interactive system is built to enable vehicle owners to monitor the status of their vehicles at any time. To serve multiple applications of automatic driving vehicles, the multi-sensor fusion environment perception system is equipped with automatic speech recognition (ASR), a vehicle-following mode, and a road-patrol mode, broadening the uses of automatic driving vehicles. These functions enable automatic driving to be used in applications such as agricultural irrigation, road firefighting, and contactless delivery during coronavirus outbreaks. Finally, using embedded system equipment, an intelligent car was built for experimental verification, and the overall recognition accuracy of the system was over 96%. © Author
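
As a rough illustration of the detection stage this abstract describes, the following is a minimal sketch that runs a pre-trained YOLOv4 model through OpenCV's DNN module. The file names, input size, thresholds, and drawing logic are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: object detection with a pre-trained YOLOv4 model via OpenCV DNN.
# File paths and thresholds are assumed placeholders, not the paper's artifacts.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("road_scene.jpg")  # hypothetical camera frame
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

for cid, score, (x, y, w, h) in zip(class_ids, scores, boxes):
    # Draw each detection (pedestrian, vehicle, traffic sign, ...) on the frame.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"{int(cid)}: {float(score):.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
```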

2.
2nd International Conference on Sustainable Computing and Data Communication Systems, ICSCDS 2023 ; : 1613-1617, 2023.
Article in English | Scopus | ID: covidwho-2321935

ABSTRACT

A smart home is a component of Internet of Things (IoT) technology that helps people with their daily activities. A variety of communication methods can be used to link devices to the Internet of Things. Impairments restrict the activities that disabled people can participate in. This paper proposes an automation system that enables disabled people to control televisions (TVs), lights, fans, and other electrical devices at home using voice commands alone, without moving. The Google Assistant feature on mobile phones is used to provide voice recognition for controlling the electronic components. The system also incorporates human temperature measurement: a temperature sensor fixed to the door checks a person's temperature and opens the door only when it is normal. This helps keep users from being exposed to illness, a relevant concern given the present COVID-19 situation. © 2023 IEEE.

3.
2nd International Conference on Sustainable Computing and Data Communication Systems, ICSCDS 2023 ; : 770-773, 2023.
Article in English | Scopus | ID: covidwho-2325493

ABSTRACT

Though many facial emotion recognition models exist, the majority of such algorithms have been rendered obsolete since the Covid-19 pandemic, as everybody is compelled to wear a face mask to protect themselves against the deadly virus. Face masks can hinder emotion recognition systems because crucial facial features are not visible in the image: masks cover essential parts of the face such as the mouth, nose, and cheeks, which play an important role in differentiating between emotions. This study aims to recognize the emotional states of anger-disgust, neutral, surprise-fear, joy, and sadness of a person wearing a face mask in an image. In the proposed method, a CNN model is trained using images of people wearing masks. To achieve higher accuracy, classes in the dataset are combined; different combinations of merged classes are evaluated and the results recorded. Images are taken from the FER2013 dataset, which consists of a large number of manually annotated facial images. © 2023 IEEE.
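
A minimal sketch of the kind of class merging and CNN training described above, assuming a Keras setup; the label remapping and the small architecture below are illustrative placeholders, not the authors' model.

```python
# Hedged sketch: merging FER2013 classes (e.g., anger+disgust, surprise+fear)
# and training a small CNN. The mapping and architecture are illustrative only.
import numpy as np
import tensorflow as tf

# Hypothetical remapping of the 7 FER2013 labels to 5 merged classes.
MERGE = {0: 0, 1: 0,   # anger, disgust -> anger-disgust
         2: 1, 5: 1,   # fear, surprise -> surprise-fear
         3: 2,         # happiness -> joy
         4: 3,         # sadness
         6: 4}         # neutral

def merge_labels(y):
    return np.vectorize(MERGE.get)(y)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 merged classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, merge_labels(y_train), epochs=20, validation_split=0.1)
```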

4.
i-Manager's Journal on Computer Science ; 11(1):26-37, 2023.
Article in English | ProQuest Central | ID: covidwho-2325471

ABSTRACT

Communication has been a struggle for everyone since the COVID-19 outbreak, and in its aftermath people have had to get accustomed to video conferencing applications. However, people with physical or mental limitations are still unable to use video conferencing apps and their interfaces. This necessitates the development of accessible web-based video chat applications. These applications can aid those who are unable to communicate verbally and/or operate standard mouse and keyboard inputs, but still need to feel close to others when they are apart. The proposed application incorporates various accessibility features such as speech-to-text and text-to-speech, gaze tracking, and pictorial speech interfaces. It enables individuals with disabilities to participate in virtual meetings on an equal footing with their peers. The goal is to remove barriers and promote inclusiveness in remote work and collaboration for all users, regardless of their abilities.

5.
Applied Sciences (Switzerland) ; 13(8), 2023.
Article in English | Scopus | ID: covidwho-2318300

ABSTRACT

Traditional learning has faced major changes due to the COVID-19 pandemic, highlighting the necessity for innovative education methods. Virtual reality (VR) technology has the potential to change teaching and learning paradigms by providing a gamified, immersive, and engaging education. The purpose of this study is to evaluate the impact of virtual reality in an academic context by using a VR software instrument (called EduAssistant). The system's features, such as a virtual amphitheater, search by voice recognition, a whiteboard, and a video conference system, have fostered a sense of connection and community interaction. The study involved 117 students in the VR experience, of whom 97 watched a pre-recorded video and 20 used the VR headset, plus an additional 20 students in traditional learning. The students who used the VR headset achieved a significantly higher mean quiz score of 8.31, compared to 7.55 for the traditional learning group, with a two-tailed p-value of 0.0468. Over 80% of all participants were satisfied (4 or 5 out of 5) with the experience, and the confidence level when searching through voice recognition was over 90%. The study demonstrates that virtual reality is an excellent approach for changing conventional education. The research results, based on samples, simulations, and surveys, revealed a positive impact of VR and its gamification methods on the students' cognitive performance, engagement, and learning experience. The immersion provided by a virtual assistant tool helped promote active and deep learning. Experiments based on EduAssistant's features suggest that virtual reality is also an effective strategy for future research related to students with disabilities. © 2023 by the authors.
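
A minimal sketch of the kind of two-tailed independent-samples comparison reported above (VR-headset quiz scores versus traditional learning), using SciPy. The score arrays are placeholders, not the study's data.

```python
# Hedged sketch: two-tailed independent-samples t-test of quiz scores between
# a VR-headset group and a traditional-learning group. Data are placeholders.
from scipy import stats

vr_scores = [9, 8, 8.5, 7.5, 9.5, 8, 8.5, 9, 7.5, 8]           # hypothetical
traditional_scores = [7, 8, 7.5, 6.5, 8, 7.5, 7, 8.5, 7, 7.5]  # hypothetical

t_stat, p_value = stats.ttest_ind(vr_scores, traditional_scores)  # two-sided by default
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
```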

6.
Advanced Intelligent Systems ; 2023.
Article in English | Web of Science | ID: covidwho-2309600

ABSTRACT

Rapid advances in wearable sensing technology have demonstrated unprecedented opportunities for artificial intelligence. In comparison with the traditional hand-held electrolarynx, a wearable, intelligent artificial throat with sound-sensing ability is a more comfortable and versatile way to assist disabled people with communication. Herein, a piezoresistive sensor with a novel configuration is demonstrated, which consists of polystyrene (PS) spheres as microstructures sandwiched between silver nanowire and reduced graphene oxide layers. Changes in the device's conducting patterns are obtained by spray-coating PS microspheres of various weight ratios and sizes, which is a fast and convenient way to establish microstructures for improving sensitivity. The wearable artificial throat device also exhibits high sensitivity, fast response time, and ultralow intensity-level detection. Moreover, the device's excellent mechanical-electrical performance allows it to detect subtle throat vibrations that can be converted into controllable sounds. An intelligent artificial throat is thus achieved by combining a deep learning algorithm with the highly flexible piezoresistive sensor to successfully recognize five different words (help, sick, patient, doctor, and COVID) with an accuracy exceeding 96%. This work opens new opportunities in voice control as well as other human-machine interface applications.

7.
Journal of Neurology, Neurosurgery and Psychiatry ; 93(9):30, 2022.
Article in English | EMBASE | ID: covidwho-2292109

ABSTRACT

Introduction: Over 50% of stroke survivors have cognitive impairment. National guidelines promote early cognitive testing; however, current pen-and-paper based tests are not always appropriate, typically take place in hospital, and are time costly for busy clinicians. This project aimed to create an easy-to-use cognitive assessment tool specifically designed for the needs of stroke survivors. We used a computerised doctor utilising automatic speech recognition and machine learning. Methods: Patients are approached if they meet the eligibility criteria of a recent acute stroke/TIA and have no pre-existing condition (e.g., dementia or severe aphasia). Participants could speak to the digital doctor on the ward or at home via a web version. Results: Recruitment started on 8th December 2020; we have screened 614 people assessed for suspected acute stroke/TIA at Sheffield Teaching Hospitals. Of those, we have recruited 71 participants (13 with TIA), with a mean NIHSS of 4.5 and a mean MoCA of 24.6. We will present initial results of factors affecting participant recruitment. We will also compare the mood and anxiety screening scores used in this study with those collected via the SNAPP database. Discussion: Screening was adapted due to the Covid pandemic, and utilising remote consent and participation allowed the project to continue.

8.
2nd International Conference for Advancement in Technology, ICONAT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2291909

ABSTRACT

The COVID-19 pandemic has become the prime reason for organizations across the world to shift their entire workforce onto virtual platforms. One of the major drawbacks of these virtual platforms is that they lack a real-time metric for detecting whether a person is attentive during lectures and meetings. This was most evident in the case of educational institutions, where students at home would often fail to pay attention to the content being taught by teachers and professors. With this research work, our aim is to create a solution for this problem with the help of AI-FER (Artificial Intelligence Facial Emotion Recognition). For this, we have proposed our own Convolutional Neural Network model, achieving an overall accuracy of 59.03%. We have also used several pre-trained models available in Google's TensorFlow library, such as DenseNet and VGG. © 2023 IEEE.
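
As a rough sketch of the pre-trained-backbone route mentioned above, the snippet below builds a frozen DenseNet121 feature extractor from tf.keras.applications with a small classification head; the input size, head layers, and class count are assumptions, not the authors' configuration.

```python
# Hedged sketch: emotion/attention classification with a frozen DenseNet121
# backbone from tf.keras.applications. Head and input size are assumptions.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # e.g., 7 emotion classes (assumed)
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```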

9.
7th International Conference on Computing Methodologies and Communication, ICCMC 2023 ; : 399-404, 2023.
Article in English | Scopus | ID: covidwho-2291873

ABSTRACT

The COVID-19 pandemic has affected healthcare in several ways. Some patients were unable to make it to appointments due to curfews, transportation restrictions, and stay-at-home directives, while less urgent procedures were postponed or cancelled. Others steered clear of hospitals out of fear of contracting an infection. As a conversational artificial intelligence-based program, the Talking Health Care Bot (THCB) could be useful during the pandemic by allowing patients to receive supportive care without physically visiting a hospital. The THCB can therefore quickly shift in-person care to online patient consultation. To give patients free primary healthcare and to narrow the supply-demand gap for human healthcare professionals, this work created a conversational bot based on artificial intelligence and machine learning. The study proposes a computer program that serves as a patient's personal virtual doctor, carefully created and thoroughly trained to communicate with patients as if it were a real person. Based on a serverless architecture, the application predicts the disease from the patient's symptoms. A talking healthcare chatbot confronts several challenges, of which the user's accent is by far the most difficult. The proposed model was evaluated using one hundred different voices and symptoms, achieving an accuracy rate of 77%. © 2023 IEEE.

10.
IEEE/ACM Transactions on Audio Speech and Language Processing ; : 1-14, 2023.
Article in English | Scopus | ID: covidwho-2306621

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has drastically impacted life around the globe. As life returns to pre-pandemic routines, COVID-19 testing has become a key component, assuring that travellers and citizens are free from the disease. Conventional tests can be expensive, time-consuming (results can take up to 48h), and require laboratory testing. Rapid antigen testing, in turn, can generate results within 15-30 minutes and can be done at home, but research shows they achieve very poor sensitivity rates. In this paper, we propose an alternative test based on speech signals recorded at home with a portable device. It has been well-documented that the virus affects many of the speech production systems (e.g., lungs, larynx, and articulators). As such, we propose the use of new modulation spectral features and linear prediction analysis to characterize these changes and design a two-stage COVID-19 prediction system by fusing the proposed features. Experiments with three COVID-19 speech datasets (CSS, DiCOVA2, and Cambridge subset) show that the two-stage feature fusion system outperforms the benchmark systems of CSS and Cambridge datasets while maintaining lower complexity compared to DL-based systems. Furthermore, the two-stage system demonstrates higher generalizability to unseen conditions in a cross-dataset testing evaluation scheme. The generalizability and interpretability of our proposed system demonstrate the potential for accessible, low-cost, at-home COVID-19 testing. IEEE
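
A minimal sketch of one of the feature families mentioned above, linear prediction analysis, computed frame-by-frame from a speech recording with librosa. The file name, LPC order, and framing are illustrative assumptions; the paper's modulation spectral features and fusion system are not reproduced here.

```python
# Hedged sketch: frame-wise linear prediction coefficients (LPC) from a speech
# file with librosa. Parameters are illustrative only.
import librosa
import numpy as np

y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical recording
frame_len, hop = 512, 256
frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)

order = 12  # assumed LPC order
lpc_per_frame = np.array([librosa.lpc(frames[:, i], order=order)
                          for i in range(frames.shape[1])])
print(lpc_per_frame.shape)  # (num_frames, order + 1)
```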

11.
Applied Sciences ; 13(8):4748, 2023.
Article in English | ProQuest Central | ID: covidwho-2304179

ABSTRACT

Traditional learning has faced major changes due to the COVID-19 pandemic, highlighting the necessity for innovative education methods. Virtual reality (VR) technology has the potential to change teaching and learning paradigms by providing a gamified, immersive, and engaging education. The purpose of this study is to evaluate the impact of virtual reality in an academic context by using a VR software instrument (called EduAssistant). The system's features, such as a virtual amphitheater, search by voice recognition, a whiteboard, and a video conference system, have fostered a sense of connection and community interaction. The study involved 117 students in the VR experience, of whom 97 watched a pre-recorded video and 20 used the VR headset, plus an additional 20 students in traditional learning. The students who used the VR headset achieved a significantly higher mean quiz score of 8.31, compared to 7.55 for the traditional learning group, with a two-tailed p-value of 0.0468. Over 80% of all participants were satisfied (4 or 5 out of 5) with the experience, and the confidence level when searching through voice recognition was over 90%. The study demonstrates that virtual reality is an excellent approach for changing conventional education. The research results, based on samples, simulations, and surveys, revealed a positive impact of VR and its gamification methods on the students' cognitive performance, engagement, and learning experience. The immersion provided by a virtual assistant tool helped promote active and deep learning. Experiments based on EduAssistant's features suggest that virtual reality is also an effective strategy for future research related to students with disabilities.

12.
IEEE Access ; 11:30575-30590, 2023.
Article in English | Scopus | ID: covidwho-2301709

ABSTRACT

Social networks and other digital media deal with huge amounts of user-generated content, where hate speech has become an increasingly relevant problem. A great effort has been made to develop automatic tools for its analysis and moderation, at least in its most threatening forms, such as violent acts against people and groups protected by law. One limitation of current approaches to automatic hate speech detection is the lack of context. Focusing on isolated messages, without considering any conversational context or even the topic being discussed, severely restricts the information available for determining whether a post on a social network should be tagged as hateful. In this work, we assess the impact of adding contextual information to the hate speech detection task. We specifically study a subdomain of Twitter data consisting of replies to digital newspapers' posts, which provides a natural environment for contextualized hate speech detection. We built a new corpus in Spanish (Rioplatense variant) focused on hate speech associated with the COVID-19 pandemic, annotated using guidelines carefully designed by our interdisciplinary team. Our classification experiments using state-of-the-art transformer-based machine learning techniques show evidence that adding contextual information improves the performance of hate speech detection for two proposed tasks, binary and multi-label prediction, increasing their Macro F1 by 4.2 and 5.5 points, respectively. These results highlight the importance of using contextual information in hate speech detection. Our code, models, and corpus have been made available for further research. © 2013 IEEE.
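
One common way to feed conversational context to a transformer classifier, sketched below under assumptions, is to encode the newspaper post and the reply as a text pair. The model name is a generic Spanish BERT placeholder and the classification head here is untrained; this is not the authors' released model or corpus pipeline.

```python
# Hedged sketch: encoding (context, reply) pairs for a transformer classifier.
# Model name and label meanings are assumptions; the head below is untrained and
# shown only to illustrate the paired-input encoding.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # assumed Spanish base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

context = "Titular del diario sobre la pandemia..."  # newspaper post (context)
reply = "Respuesta del usuario al posteo..."         # tweet reply to classify

inputs = tokenizer(context, reply, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()  # 0 = not hateful, 1 = hateful (assumed labels)
```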

13.
Dissertation Abstracts International: Section B: The Sciences and Engineering ; 84(6-B):No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-2301457

ABSTRACT

Interacting with computer systems through speech is more natural than conventional interaction methods. It is also more accessible, since it does not require precise selection of small targets or rely entirely on visual elements like virtual keys and buttons. Speech also enables contactless interaction, which is of particular interest when touching public devices is to be avoided, as in the recent COVID-19 pandemic. However, speech is unreliable in noisy places and can compromise users' privacy and security in public. Image-based silent speech, which primarily converts tongue and lip movements into text, can mitigate many of these challenges. Since it does not rely on acoustic features, users can silently speak without vocalizing the words. It has also been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environments). Though the method shows promise, very little is known about people's perceptions of using it, their anticipated performance with silent speech input, and their approaches to avoiding potential misrecognition errors. Besides, existing silent speech recognition models are slow and error-prone, or use stationary, external devices that are not scalable. In this dissertation, we attempt to address these issues. Towards this, we first conduct a user study to explore users' attitudes towards silent speech, with a particular focus on social acceptance. Results show that people perceive silent speech as more socially acceptable than speech input but are concerned about input recognition, privacy, and security issues. We then conduct a second study examining users' error tolerance with speech and silent speech input methods. Results reveal that users are willing to tolerate more errors with silent speech input than with speech input, as it offers a higher degree of privacy and security. We conduct another study to identify a suitable method for providing real-time feedback on silent speech input. Results show that users find a feedback method effective and significantly more private and secure than a commonly used video feedback method. In light of these findings, which establish silent speech as an acceptable and desirable mode of interaction, we take a step forward to address the technological limitations of existing image-based silent speech recognition models to make them more usable and reliable on computer systems. Towards this, we first develop LipType, an optimized version of LipNet with improved speed and accuracy. We then develop an independent repair model that processes video input for poor lighting conditions, when applicable, and corrects potential errors in the output for increased accuracy. We then test this model with LipType and other speech and silent speech recognizers to demonstrate its effectiveness. In an evaluation, the model reduced word error rate by 57% compared to the state of the art without compromising overall computation time. However, we identify that the model is still susceptible to failure due to the variability of user characteristics. A person's speaking rate, for instance, is a fundamental user characteristic that can influence speech recognition performance due to the variation in the acoustic properties of human speech production. We formally investigate the effects of speaking rate on silent speech recognition. Results revealed that native users speak about 8% faster than non-native users, but both groups slow down at comparable rates (34-40%) when interacting with silent speech, mostly to increase its accuracy rates. A follow-up experiment confirms that slowing down does improve the accuracy of silent speech recognition. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
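
The dissertation reports recognition quality as word error rate (WER). As a small reference point, the sketch below computes standard WER as word-level edit distance divided by reference length; it is illustrative only and not the dissertation's evaluation code.

```python
# Hedged sketch: standard word error rate (WER) via Levenshtein distance over
# words; illustrative only.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("please call the doctor now", "please call doctor no"))  # 0.4
```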

14.
ACM Transactions on Asian and Low-Resource Language Information Processing ; 21(5), 2022.
Article in English | Scopus | ID: covidwho-2299916

ABSTRACT

Emotions, the building blocks of the human intellect, play a vital role in Artificial Intelligence (AI). For a robust AI-based machine, it is important that the machine understands human emotions. COVID-19 has introduced the world to no-touch intelligent systems. With an influx of users, it is critical to create devices that can communicate in the local dialect. A multilingual system is required in countries like India, which has a large population and a diverse range of languages. Given the importance of multilingual emotion recognition, this research introduces BERIS, an Indian-language emotion detection system. From an Indian-language sound recording, BERIS estimates both acoustic and textual characteristics. To extract the textual features, we used Multilingual Bidirectional Encoder Representations from Transformers. For acoustics, BERIS computes Mel-frequency cepstral coefficients, linear prediction coefficients, and pitch. The extracted features are merged into a linear array. Since the dialogues are of varied lengths, the data are normalized so that all arrays have equal length. Finally, we split the data into training and validation sets to construct a predictive model that can predict emotions from new input. On all the datasets presented, quantitative and qualitative evaluations show that the proposed algorithm outperforms state-of-the-art approaches. © 2022 Association for Computing Machinery.
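
A minimal sketch of the acoustic side described above: MFCCs and pitch extracted with librosa and merged into a single feature vector. The file name, MFCC count, and mean-pooling are assumptions; the LPC features and the multilingual BERT text branch are not reproduced.

```python
# Hedged sketch: extracting MFCCs and pitch with librosa and concatenating them
# into one feature vector, in the spirit of an acoustic feature branch.
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical recording

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # frame-wise pitch estimates

# Simple fixed-length summary: mean over frames, then concatenate.
features = np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0)]])
print(features.shape)  # (14,)
```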

15.
2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering, ICECONF 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2297172

ABSTRACT

This research endeavor focuses on identifying patients with the Covid-19 virus via a novel voice recognition technique that makes use of a Support Vector Machine ('SVM') and compares its accuracy with that of 'K-Nearest Neighbor' ('KNN'). For the speech recognition comparison, the SVM method is regarded as group 1 and the KNN method as group 2, with 20 samples in each group. The outcomes were analyzed statistically using an independent-samples t-test with a 5% margin of error and a pretest power of 80%. At a significance of 0.042 (p < 0.05), KNN obtains an accuracy of 87.5%, whereas SVM achieves an accuracy of 96.5%. Compared to KNN, SVM achieves much higher prediction accuracy for Covid-19 with the proposed voice recognition approach. © 2023 IEEE.
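
A minimal sketch of the kind of SVM-versus-KNN accuracy comparison described above, using scikit-learn on placeholder feature data; the features, kernel, and neighborhood size are assumptions, not the study's setup.

```python
# Hedged sketch: comparing SVM and KNN accuracy with scikit-learn on placeholder
# voice-feature data; data and hyperparameters are illustrative only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))   # 40 hypothetical voice samples, 20 features
y = np.tile([0, 1], 20)         # 0 = healthy, 1 = COVID-positive (assumed labels)

svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean()
print(f"SVM accuracy: {svm_acc:.3f}, KNN accuracy: {knn_acc:.3f}")
```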

16.
Lecture Notes in Networks and Systems ; 551:579-589, 2023.
Article in English | Scopus | ID: covidwho-2296254

ABSTRACT

Advances in e-learning systems give students new opportunities to improve their academic performance and access e-learning education. Because it provides benefits over traditional learning, e-learning is becoming more popular. The coronavirus disease pandemic has caused educational institution closures all across the world; more than a billion students worldwide are not attending educational institutions. As a result, e-learning formats such as online and digital platform-based instruction have grown significantly. This study focuses on this issue and provides learners with a facial emotion recognition model. A CNN model is trained to assess images and detect facial expressions. This research develops an approach that recognizes facial emotions in real time from students' expressions. The phases of our technique are face detection using Haar cascades and emotion identification using a CNN classifier trained on the FER2013 dataset with seven different emotions. This enables real-time facial expression recognition and helps teachers adapt their presentations to their students' emotional state. The resulting emotion recognition achieves 62% accuracy, higher than the state-of-the-art accuracy while requiring less processing. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
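
A minimal sketch of the first phase described above, Haar-cascade face detection with OpenCV, with the emotion step left as a stub; the frame path, resize size, and the trained CNN are assumptions.

```python
# Hedged sketch: Haar-cascade face detection feeding face crops to an assumed
# FER2013-trained CNN (not shown). Paths and parameters are illustrative.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("student_frame.jpg")  # hypothetical webcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))   # FER2013 input size
    # emotion = cnn_model.predict(roi[None, ..., None])  # assumed trained CNN
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
```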

17.
3rd International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics, ICA-SYMP 2023 ; : 127-130, 2023.
Article in English | Scopus | ID: covidwho-2275520

ABSTRACT

One of the difficult challenges in AI development is making machines understand human feelings through expression, because humans can express feelings in various ways, for example through voice, facial actions, or behavior. Facial Emotion Recognition (FER) has been used in interrogating suspects, as a tool to help detect emotions in people with nerve damage, and even during the COVID-19 pandemic when patients hide their timelines. It can also be applied to detect lies through micro-expressions. This work mainly focuses on FER. The results of a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), and a Vision Transformer were compared. Human emotion expressions were classified using facial expression datasets from AffectNet, Tsinghua, Extended Cohn-Kanade (CK+), Karolinska Directed Emotional Faces (KDEF), and Real-world Affective Faces (RAF). Finally, all models were evaluated on the testing dataset to confirm their performance. The results show that the Vision Transformer model outperforms the other models. © 2023 IEEE.

18.
2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022 ; : 246-252, 2022.
Article in English | Scopus | ID: covidwho-2262319

ABSTRACT

Detecting residents' emotions during a disaster scenario is important for government agencies to properly take care of their constituents. COVID-19 is a global disaster scenario that has caused unprecedented shutdowns, unemployment, death, and isolation. The behavioral and emotional health impact of COVID-19 is investigated in this study through the use of sentiment analysis and emotion recognition. The dataset is formed by collecting tweets from the seven months before COVID-19 became prevalent in March 2020 and the seven months after. The VADER sentiment analysis method was used to determine whether a tweet was positive, negative, or neutral. For emotion recognition, several machine learning algorithms were evaluated, and a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model performed better than the other models. Hence, CNN-LSTM was used to classify the emotion of each tweet as anger, fear, joy, or sadness. Each tweet's stored longitude and latitude were geocoded to give its exact location, which was used to compare states within the USA and, finally, to compare the USA as a whole with Canada and Mexico. Sentiment analysis shows that all countries have experienced an increase in negative tweets. Emotion recognition shows that, compared to Canada and Mexico, the USA has experienced a steep drop in emotional health. © 2022 IEEE.
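
A minimal sketch of the VADER sentiment step described above, using the common ±0.05 compound-score thresholds for positive/negative/neutral; the thresholds and example tweet are assumptions, and the CNN-LSTM emotion model is not reproduced.

```python
# Hedged sketch: labeling a tweet positive/negative/neutral with VADER using the
# commonly recommended +/-0.05 compound-score thresholds.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label(tweet: str) -> str:
    compound = analyzer.polarity_scores(tweet)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(label("Staying home again, but grateful everyone is safe."))  # hypothetical tweet
```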

19.
2022 IEEE International Conference on Big Data, Big Data 2022 ; : 5698-5707, 2022.
Article in English | Scopus | ID: covidwho-2257758

ABSTRACT

The COVID-19 pandemic has caused hate speech on online social networks to become a growing issue in recent years, affecting millions. Our work aims to improve automatic hate speech detection to prevent escalation to hate crimes. The first challenge in hate speech research is that existing datasets suffer from quite severe class imbalances. The second challenge is the sparsity of information in textual data. The third challenge is the difficulty in balancing the tradeoff between utilizing semantic similarity and noisy network language. To combat these challenges, we establish a framework for automatic short text data augmentation by using a semi-supervised hybrid of Substitution Based Augmentation and Dynamic Query Expansion (DQE), which we refer to as SubDQE, to extract more data points from a specific class from Twitter. We also propose the HateNet model, which has two main components, a Graph Convolutional Network and a Weighted Drop-Edge. First, we propose a Graph Convolutional Network (GCN) classifier, using a graph constructed from the thresholded cosine similarities between tweet embeddings to provide new insights into how ideas are connected. Second, we propose a weighted Drop-Edge based stochastic regularization technique, which removes edges randomly based on weighted probabilities assigned by the semantic similarities between tweets. Using 3 different SubDQE-augmented datasets, we compare our HateNet model using eight different tweet embedding methods, six other baseline classification models, and seven other baseline data augmentation techniques previously used in the realm of hate speech detection. Our results show that our proposed HateNet model matches or exceeds the performance of the baseline models, as indicated by the accuracy and F1 score. © 2022 IEEE.
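
A minimal sketch of the graph-construction idea mentioned above: a thresholded cosine-similarity adjacency matrix over tweet embeddings of the kind that could feed a GCN. The embeddings and threshold are placeholders, and the GCN classifier and weighted Drop-Edge regularization are not shown.

```python
# Hedged sketch: thresholded cosine-similarity graph over tweet embeddings.
# Embeddings and threshold are placeholders; GCN / weighted Drop-Edge omitted.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 768))   # hypothetical tweet embeddings

sim = cosine_similarity(embeddings)        # (100, 100) pairwise similarities
threshold = 0.5                            # assumed cutoff
adjacency = (sim >= threshold).astype(float)
np.fill_diagonal(adjacency, 0.0)           # no self-loops

print(int(adjacency.sum() // 2), "undirected edges")
```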

20.
International Journal of Stroke ; 18(1 Supplement):61-62, 2023.
Article in English | EMBASE | ID: covidwho-2254349

ABSTRACT

Introduction: Over 50% of stroke survivors have cognitive impairment. National guidelines promote early cognitive testing; however, current pen-and-paper based tests are not always appropriate, typically take place in hospital, and are time costly for busy clinicians. This project aimed to create an easy-to-use cognitive assessment tool specifically designed for the needs of stroke survivors. We used a computerised doctor utilising automatic speech recognition and machine learning. Method(s): Patients were approached if they met the eligibility criteria of a recent acute stroke/TIA, had no pre-existing medical condition (e.g., dementia or severe aphasia), and were not too medically unwell to complete the assessment. Participants completed the computerised doctor, or "CognoSpeak", on the ward using a tablet or at home via a web version (on a home computer or tablet). The assessment included the GAD and PHQ9. All had a standard cognitive assessment with the Montreal Cognitive Assessment (MoCA). Result(s): Recruitment started on 8th December 2020 and is ongoing. 951 people were screened and 104 were recruited; 49 have completed the baseline CognoSpeak, 8 have withdrawn, and 3 have died. The mean NIHSS was 3.8 and the mean MoCA 23.9; 31 participants were female. Participants had a mean education level of 17 years. Conclusion(s): Preliminary data will be presented highlighting the feasibility of an automated cognitive and mood assessment that can be completed at home and on the Hyper-acute Stroke Unit. Screening was adapted due to the Covid pandemic, and utilising remote consent and participation allowed the project to continue.
